Instructions
This animation environment for
direct optimization methods comprises three different parts:
- Welcome Part
Here the user can choose an optimization strategy as well as a goal function. At the moment the
user can choose between Genetic Algorithms, Simulated Annealing and Pattern
Search, which are offered in the right choice menu. After selecting one
of the 13 goal functions offered in the left choice menu, a 3D and a 2D representation
of this function is shown below.
- Animation Part
This part enables the user
to observe and to control the animation. More information about the animation
part is available below.
- Help Part
The help part comprises
user instructions for our animation environment, detailed information about
the animated optimization algorithms and links to other applets and web
pages about Genetic Algorithms, Simulated Annealing and Pattern Search.
The left frame allows the user
to switch between the different parts of our animation environment.
Animation Part
The animation environment was
specifically developed to visualize the search process of direct optimization
algorithms. At the moment the search processes of Genetic Algorithms, Simulated
Annealing and Pattern Search can be visualized. The task of the optimization
algorithms is to maximize a 2-dimensional goal function. This version of
our animation environment allows the user to select one of 13 functions. The colored
square on the right shows a density plot of the selected goal function,
which serves as the playing ground for the optimization algorithm. In the text area
on the left, detailed information about the current state of the optimization
process is presented. The animation can be controlled by the following
buttons:
- "Start" button
The search process of the selected optimization algorithm can be started
by pressing the "Start" button. The "Start" button is also used to trigger
the presentation and/or comparison of computed optimization trajectories
(for more details see the "Comparison" button).
- "Stop" button
Pressing the "Stop" button stops a running search process. That way the
user can thoroughly analyze selected states of an animation for an arbitrary
amount of time.
- "Continue" button
This button allows the user to continue a stopped search process.
- "Reset" button
The "Reset" button completely aborts a started search process.
- "Adjustment" button
This button gives the user
the opportunity to manipulate the control parameters of the selected optimization
algorithm.
- "Comparison" button
This button enables the user to display and/or compare optimization
trajectories computed by the implemented optimization algorithms. When
the "Comparison" button has been pressed, a window appears that shows
whether optimization data for the currently chosen goal function is
available. If a new goal function is selected, all previously stored
optimization data is deleted.
The comparison can be started by pressing the "Start" button.
The optimized goal function
and the animated optimization algorithm can be changed using the two choice
menus in the middle of the animation part. With the scrolling lists on
the right it is possible (with a double click) to vary the shape and color
of the search points, which are painted onto the density plot.
Genetic Algorithms
Genetic Algorithms represent
a very common and widespread probabilistic search method. In the following,
some fundamentals of Genetic Algorithms are described. This class of algorithms,
which is based on the principles of natural evolution, maintains a population
P(t) = {s_1^t, s_2^t, ..., s_n^t}
of individuals s_j^t, j ∈ {1,...,n}; t,n ∈ N.
Each individual s_j^t is implemented as a fixed-length
binary string (a so-called chromosome) and represents a potential solution
of the optimization problem to be solved. In case of real-parameter
optimization the binary string is divided into segments, each containing
the binary code of the corresponding parameter value. The superscript t represents
the iteration step (the current generation). The goal of the Genetic Algorithm
is to evolve, through the alternation of generations, towards better and better
regions of the search space. The alternation is performed by applying
probabilistic genetic operators (selection, crossover, and mutation).
Selection sorts poor solutions (individuals with low fitness) out
of the population. For that reason, this operator can be viewed as an artificial
version of natural selection (a realization of the Darwinian principle
"survival of the fittest" among string creatures). The fitness of an individual
is calculated by evaluating the goal function of the optimization problem.
After selection, the probabilistic recombination operators crossover and
mutation are applied to generate new solutions for the next generation
of individuals, which is evaluated again in the subsequent iteration step.
The basic structure of a Genetic Algorithm is shown below.
procedure Genetic Algorithm;
begin
  t := 0;
  initialize P(t);
  evaluate P(t);
  while (not termination-condition) do
  begin
    t := t+1;
    select P(t) from P(t-1);
    recombine P(t);
    evaluate P(t);
  end;
end;
Basic structure of a Genetic Algorithm
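The pseudocode above can be sketched as a small Python program. Everything concrete below (goal function, string length, population size, operator probabilities) is an illustrative assumption rather than the applet's actual configuration; the sketch uses roulette-wheel selection, one-point crossover and bit-flip mutation in the spirit of the Simple Genetic Algorithm.

```python
import random

# Illustrative parameters, not the applet's settings
STRING_LEN = 16       # bits per chromosome
POP_SIZE = 20         # must be even for the pairwise crossover below
P_CROSSOVER = 0.6
P_MUTATION = 0.01

def decode(bits):
    """Map a binary string to a real value in [-1, 1]."""
    x = int("".join(map(str, bits)), 2)
    return -1.0 + 2.0 * x / (2 ** STRING_LEN - 1)

def fitness(bits):
    """Example goal function to maximize: f(x) = 1 - x^2."""
    x = decode(bits)
    return 1.0 - x * x

def select(pop):
    """Fitness-proportionate (roulette-wheel) selection."""
    weights = [fitness(s) for s in pop]
    return random.choices(pop, weights=weights, k=len(pop))

def recombine(pop):
    nxt = []
    for i in range(0, len(pop), 2):
        a, b = pop[i][:], pop[i + 1][:]
        if random.random() < P_CROSSOVER:          # one-point crossover
            cut = random.randrange(1, STRING_LEN)
            a[cut:], b[cut:] = b[cut:], a[cut:]
        for s in (a, b):                           # bit-flip mutation
            for j in range(STRING_LEN):
                if random.random() < P_MUTATION:
                    s[j] ^= 1
        nxt += [a, b]
    return nxt

def genetic_algorithm(generations=50):
    pop = [[random.randint(0, 1) for _ in range(STRING_LEN)]
           for _ in range(POP_SIZE)]
    for t in range(generations):
        pop = recombine(select(pop))
    return max(pop, key=fitness)

best = genetic_algorithm()
```

An elitist variant, as offered by the check box described below, would additionally copy the best individual of each generation unchanged into the next one.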
The implementation of the Genetic
Algorithm used in our animation environment for the most part agrees with
the Simple Genetic Algorithm presented in Goldberg, D.E.: Genetic Algorithms
in Search, Optimization and Machine Learning; Addison-Wesley, 1989.
In order to ensure an easy and informative observation of the evolving
data structure of the Genetic Algorithm, the computed populations are plotted
separately rather than all at once onto the density plot. The entire optimization
trajectory can be examined by pressing the "Comparison"
button. The little cross represents the individual with the best goal
function value of the current population. When the elitist strategy check
box is selected, the search point with the best goal function value found
in the previous population survives in any case and is put into the next
population. Otherwise it may happen that this point is not selected into
the next population and gets lost.
Simulated Annealing
The concept of Simulated Annealing,
which also represents a direct probabilistic search algorithm, is based
on a strong analogy between the physical annealing process of solids and
the problem of solving complex combinatorial optimization problems. The
goal of the physical annealing process is to obtain low energy states of
a solid in a heat bath. To reach this goal the first step is to increase
the temperature of the heat bath to a maximum value at which the solid
melts. Then the temperature of the heat bath is decreased carefully until
the particles arrange themselves in the ground state of the solid, where
the energy of the system is minimal. The ground state of the solid is obtained
only if the maximum temperature is sufficiently high
and the subsequent cooling is done sufficiently slowly.
Otherwise the solid will be frozen into a meta-stable state rather than
into the ground state, where the particles are arranged in a perfectly structured
lattice. In the early 1980s the concept of physical annealing was introduced
into combinatorial optimization. The equivalences are the following:
- Solutions of the combinatorial optimization problem are equivalent to states of the physical system.
- The cost of a solution is equivalent to the energy of a state.
The basic structure of the resulting
Simulated Annealing algorithm is shown below.
procedure Simulated Annealing;
begin
  initialize(x_start, T_0, M_0);
  k := 0;
  x := x_start;
  repeat
    for l := 1 to M_k do
    begin
      generate(y from the neighborhood of x);
      if F(y) >= F(x) then
        x := y
      else
        if exp((F(y) - F(x))/T_k) > random[0,1] then
          x := y;
    end;
    k := k + 1;
    calculate_length_of_inner_loop(M_k);
    calculate_temperature(T_k);
  until stop_criterion;
end;
x_start: starting point
k: temperature level
T_0: initial temperature
Calculation of T_k: T_k := (T_{k-1} - s) * m, with k ∈ N+; s,m ∈ R
M_0: number of state transitions at temperature T_0
Calculation of M_k: M_k := round(M_{k-1} * exp(k * ln(p))), with k ∈ N+; p ∈ R
F: optimized goal function
x,y: possible solutions of the optimization problem
Basic structure of Simulated Annealing
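As a sketch, the loop structure and the cooling schedules above can be written in Python. The goal function, neighborhood, starting point and all parameter values here are illustrative assumptions, not the applet's settings.

```python
import math
import random

def simulated_annealing(F, neighbor, x_start,
                        T0=10.0, M0=5, s=0.0, m=0.9, p=1.01, T_stop=0.5):
    """Homogeneous Simulated Annealing maximizing F.

    Cooling schedules as in the legend above:
      T_k = (T_{k-1} - s) * m
      M_k = round(M_{k-1} * exp(k * ln(p)))
    """
    x, T, M, k = x_start, T0, M0, 0
    while T > T_stop:                        # outer loop: cool down
        for _ in range(M):                   # inner loop: M_k transitions
            y = neighbor(x)                  # step 1: neighboring solution
            if F(y) >= F(x):                 # step 2: acceptance criterion
                x = y                        # improvements are always accepted
            elif math.exp((F(y) - F(x)) / T) > random.random():
                x = y                        # deteriorations are accepted with
                                             # a temperature-dependent probability
        k += 1
        T = (T - s) * m
        M = round(M * math.exp(k * math.log(p)))
    return x

# Example: maximize F(x) = -(x - 3)^2 with a simple real-valued neighborhood
best = simulated_annealing(F=lambda x: -(x - 3) ** 2,
                           neighbor=lambda x: x + random.uniform(-1, 1),
                           x_start=0.0)
```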
The Simulated Annealing algorithm
shown above represents an iterative process consisting of an inner and
an outer loop (homogeneous Simulated Annealing). Within the outer loop
the temperature T is cooled down. At temperature level Tk the
inner loop performs Mk transitions. A transition is a combined
action resulting in the transformation of a current solution into a subsequent
one. A transition consists of the following two steps:
- calculation of a solution y in the neighborhood of the current solution x,
- application of the acceptance criterion to y.
In contrast to conventional
local hill-climbing algorithms, which only accept better solutions, Simulated
Annealing also accepts deteriorations in cost. This is exactly the property
that enables Simulated Annealing to escape from local optima. At
large temperature values, large deteriorations are accepted. As T decreases,
only smaller deteriorations are accepted. Finally, as the temperature
approaches 0, no deteriorations are accepted at all and Simulated Annealing
behaves like a conventional local search method.
To represent the solutions of the processed optimization problems we have
retained the data structure of the Genetic Algorithm, because that way
the definition of diverse neighborhood structures becomes very easy.
We have implemented three different possibilities for computing neighboring
solutions:
- neighborhood structure 1
Bit i of the fixed-length binary string is flipped with probability
exp(-2/i) * (1 - 1/sqrt(sqrt(T_k + 1))).
With decreasing temperature this neighborhood structure has the effect
that increasingly only the less significant bits of the fixed-length
binary string are flipped, whereas the significant ones remain unchanged.
- neighborhood structure 2
A randomly chosen bit of the fixed-length binary string is flipped.
A second bit is flipped with probability 0.5. If the second bit was flipped,
another bit is chosen and flipped, again with probability 0.5. This procedure
stops as soon as a chosen bit remains unchanged.
- neighborhood structure 3
Each bit of the fixed-length binary string is flipped with the adjusted
mutation probability (this neighborhood structure works exactly like
mutation in Genetic Algorithms).
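The three neighborhood structures can be sketched as bit-flip operators on a list of bits in Python. The bit indexing in structure 1 (i running from 1 at the most significant bit) and the default mutation probability in structure 3 are assumptions chosen for illustration.

```python
import math
import random

def neighbor_1(bits, T):
    """Structure 1: flip bit i with probability
    exp(-2/i) * (1 - 1/sqrt(sqrt(T + 1))).
    As T decreases, the flip probability of every bit shrinks; the factor
    exp(-2/i) keeps the significant bits the most stable.
    (Indexing i from 1 at the most significant bit is an assumption.)"""
    y = bits[:]
    for i in range(1, len(y) + 1):
        p = math.exp(-2 / i) * (1 - 1 / math.sqrt(math.sqrt(T + 1)))
        if random.random() < p:
            y[i - 1] ^= 1
    return y

def neighbor_2(bits):
    """Structure 2: flip one randomly chosen bit; then repeatedly flip a
    further random bit with probability 0.5, stopping as soon as a chosen
    bit remains unchanged."""
    y = bits[:]
    y[random.randrange(len(y))] ^= 1
    while random.random() < 0.5:
        y[random.randrange(len(y))] ^= 1
    return y

def neighbor_3(bits, p_mut=0.01):
    """Structure 3: flip each bit independently with the mutation
    probability, exactly like mutation in a Genetic Algorithm."""
    return [b ^ 1 if random.random() < p_mut else b for b in bits]
```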
The radio buttons in the adjustment
window of the Simulated Annealing algorithm enable the user to select one
of the neighborhood structures presented above. When the elitist strategy
check box is selected, the inner loop starts at the best point found during
the previous temperature level. Otherwise the last point computed in the
previous inner loop is chosen as the starting point. As with the population-based
Genetic Algorithm described above, the optimization trajectory of the
Simulated Annealing algorithm is not plotted all at once onto the density
plot during animation. To make the observation of this algorithm easy and
informative, the transitions of each temperature level are plotted separately.
The sequence of the computed transitions can be recognized by following
the lines drawn between the single search points. The search point with
the best goal function value found during the previous temperature level,
as well as all improvements of this search point achieved at the current
temperature level, are marked with a little cross.
More details about Simulated
Annealing can be found in Aarts, E.; Korst, J.: Simulated Annealing
and Boltzmann Machines; Wiley, 1990.
Pattern Search
The Pattern Search algorithm represents a deterministic local optimization
method that was developed by Hooke and Jeeves in the early 1960s. It is
a so-called direct search method for local parameter optimization that
exclusively uses goal function values to guide the optimization process.
The direction of increasing (respectively decreasing) goal function values
is calculated according to deterministic rules, without any calculation
of gradients.
The algorithm consists of two different parts: (1) groping along the
directions of the axes (exploration) and (2) moving forward through
extrapolation. At first the user chooses an arbitrary starting point
within the search space of the selected goal function. Additionally
the user has to select initial step sizes (for groping in the directions
of the coordinate axes x1 and x2) and an accuracy bound, which represents
the termination condition of the Pattern Search algorithm. Starting from
the selected starting point, the algorithm searches for better values of
the goal function with the predefined step sizes. In case of progress the
next point is determined by moving forward in the direction of the line
between the starting point and the improved point. After that the second
exploration cycle is performed. If no progress is made within an exploration
cycle, the algorithm first checks whether a forward movement has already
taken place. If so, the algorithm goes back to the state before the forward
movement. If not, the step sizes are bisected, provided that they are still
larger than the accuracy bound. If the step sizes become smaller than the
accuracy bound, the algorithm stops. Fig. 1 shows the algorithmic structure
of the Pattern Search algorithm of Hooke and Jeeves.
Fig. 1: Algorithmic structure of the Pattern Search Method
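The exploration and extrapolation scheme described above can be sketched in Python. For brevity this sketch uses a single step size for both axes; the goal function, starting point, initial step size and accuracy bound are illustrative assumptions.

```python
def pattern_search(F, x_start, step=1.0, accuracy=1e-3):
    """Pattern Search of Hooke and Jeeves, maximizing F.
    (1) exploration: groping along the coordinate axes;
    (2) extrapolation: moving forward along the line between the old
        and the improved point; failed moves are undone and the step
        size is bisected until it falls below the accuracy bound."""

    def explore(base, s):
        x = list(base)
        for i in range(len(x)):              # grope along each axis
            for d in (s, -s):
                trial = x[:]
                trial[i] += d
                if F(trial) > F(x):
                    x = trial
                    break
        return x

    x = list(x_start)
    while step >= accuracy:
        x_new = explore(x, step)
        if F(x_new) > F(x):                  # progress: extrapolate
            while True:
                x_pat = [2 * n - o for n, o in zip(x_new, x)]
                x, x_new = x_new, explore(x_pat, step)
                if F(x_new) <= F(x):
                    x_new = x                # go back to the state before
                    break                    # the failed forward movement
        else:
            step /= 2                        # bisect the step size
        x = x_new
    return x

# Example: maximize F(x1, x2) = -(x1 - 1)^2 - (x2 + 2)^2
best = pattern_search(lambda v: -(v[0] - 1) ** 2 - (v[1] + 2) ** 2,
                      [0.0, 0.0])
```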
Advantages of the Pattern Search algorithm:
- very good convergence characteristics
- low memory requirements
- simple algorithmic structure
- uses only basic mathematical calculations
- numerically stable, because goal function values are used exclusively in comparison operations
- less sensitive to inaccurately performed calculations than gradient-based search methods
Disadvantages of the Pattern Search algorithm:
- The limitation of the groping steps to the directions of the coordinate axes can cause premature termination.
- The choice of the starting point and the initial step sizes strictly determines which optimum will be found. It may be a local or the global optimum.
- If the initial step sizes are chosen too small (or too large), the algorithm requires many iteration cycles until an optimum is found.
More details about the Pattern Search of Hooke and Jeeves can be found
in Hooke, R.; Jeeves, T. A.: "Direct Search" Solution of Numerical and
Statistical Problems; Journal of the ACM 8, 1961, pp. 212-229.
Links to additional information and applets about Genetic Algorithms,
Simulated Annealing and Pattern Search