Evolutionary Algorithms

Evolutionary algorithms (EAs) are optimization algorithms inspired by biological systems. EAs are one class of optimization methods, but more practical optimization methods exist, e.g., the simplex method and branch-and-cut algorithms (Gurobi is an excellent solver). EAs are still important, however, because they can be applied to multi-armed bandit problems, which underlie reinforcement learning and also A/B testing.


I tried to mathematically improve EAs, especially genetic algorithms. First of all, I proposed a basic mathematical model of EAs. This model may be new, or it may be essentially the same as the cross-entropy method proposed by Reuven Rubinstein. This point is not important, however, because the model is just the starting point of my research.
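To make this concrete, here is a minimal cross-entropy-method-style sketch in Python; the product-of-Bernoullis model, the elite fraction, and the function names are my illustrative assumptions, not the exact model from the thesis:

```python
import numpy as np

def cem_binary(fitness, n_vars, pop_size=100, elite_frac=0.1, iters=50, rng=None):
    """Cross-entropy-method-style loop over bit strings.

    Sample from a product-of-Bernoullis model, select the best solutions,
    and re-estimate the model parameters from them.
    """
    rng = rng or np.random.default_rng()
    p = np.full(n_vars, 0.5)                     # one Bernoulli parameter per bit
    n_elite = max(1, int(pop_size * elite_frac))
    for _ in range(iters):
        pop = (rng.random((pop_size, n_vars)) < p).astype(int)
        scores = np.array([fitness(x) for x in pop])
        elite = pop[np.argsort(scores)[-n_elite:]]   # keep the best (maximization)
        p = elite.mean(axis=0)                   # re-estimate the sampling model
    return p

# Example: OneMax, where the optimum is the all-ones string.
p = cem_binary(lambda x: x.sum(), n_vars=20)
```

Note how each generation of solutions is used once to update the model and then discarded; this is the property the first contribution below addresses.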


My doctoral thesis makes three contributions. The first is a mathematical derivation of the population mechanism, which allows us to reuse previously generated solutions. In the basic mathematical model, generated solutions are used only once (for crossover) and are immediately discarded, as in UMDA, whereas genetic algorithms keep promising solutions as the population. Simply put, my proposed method may be just a mathematical derivation of genetic algorithms. However, it has one interesting and novel feature: the retained solutions are weighted.
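A minimal sketch of such weighting in Python; the Boltzmann target and self-normalized importance weights are my illustrative assumptions, not the exact weighting scheme derived in the thesis:

```python
import numpy as np

def reweight_population(pop, scores, model_logpdf, beta=1.0):
    """Self-normalized importance weights for retained solutions.

    Each solution x was drawn from an earlier sampling model (the
    proposal); it is reused under a Boltzmann target exp(beta * score)
    by weighting it with w proportional to target(x) / proposal(x).
    """
    log_w = beta * np.asarray(scores, dtype=float) \
        - np.array([model_logpdf(x) for x in pop])
    log_w -= log_w.max()              # stabilize before exponentiating
    w = np.exp(log_w)
    return w / w.sum()
```

The next sampling model can then be estimated from the weighted solutions, e.g. `p = np.average(pop, axis=0, weights=w)` for a product-of-Bernoullis model, instead of from unweighted elites.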


The second is a hierarchical mechanism, which is also mathematically derived from the basic model. In EAs, random sampling is carried out in the early stage of optimization, and the sampling is concentrated on promising areas in the final stage. This process is called convergence, and also annealing, because it is related to the process of simulated annealing (SA). Annealing is unstable, or said to be one-directional, because going back from concentrated search to random search requires discarding promising solutions, as in SA. Simply put, my proposed method, hierarchical importance sampling, is more stable than annealing. This mechanism finds the optimal solution of a 2D Ising model with 400 variables, where the configuration with all spins pointing in the same direction is optimal. I think this setting is the most difficult one for EAs among 2D Ising models, and as far as I know, conventional EAs cannot find the optimal solution of this problem.
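For reference, the benchmark itself is easy to state in Python; this is a minimal sketch, and the 20x20 grid and periodic boundary conditions are my assumptions for the 400-variable case:

```python
import numpy as np

def ising_energy(spins):
    """Energy of a 2D ferromagnetic Ising configuration (+1/-1 spins).

    Nearest-neighbour couplings with periodic boundaries; the minimum is
    attained when all spins point in the same direction.
    """
    s = np.asarray(spins)
    right = np.roll(s, -1, axis=1)    # right neighbour of each site
    down = np.roll(s, -1, axis=0)     # lower neighbour of each site
    return -np.sum(s * right) - np.sum(s * down)

# 400 variables as a 20x20 grid; the all-up configuration is optimal.
best = ising_energy(np.ones((20, 20), dtype=int))   # -800
```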


The third concerns the optimal convergence speed, which is also mathematically derived from the basic model. A feature of the search can be represented by entropy, and a linear reduction of the entropy gives the optimal convergence speed for the basic model. This convergence schedule can also be applied directly to reinforcement learning and A/B testing algorithms.
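A minimal sketch of such a schedule in Python, in the spirit of the memo listed below on adjusting the inverse temperature; the bisection over the Boltzmann distribution's inverse temperature is my illustrative assumption:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def boltzmann(scores, beta):
    z = beta * scores - np.max(beta * scores)   # shift for numerical stability
    w = np.exp(z)
    return w / w.sum()

def beta_for_entropy(scores, target_h, hi=1e6, tol=1e-9):
    """Find beta such that the Boltzmann distribution over the given scores
    has the requested entropy (entropy decreases monotonically in beta)."""
    lo = 0.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if entropy(boltzmann(scores, mid)) > target_h:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Linear schedule: entropy drops from log(n) to 0 over 10 steps.
scores = np.random.default_rng(0).normal(size=100)
h0 = np.log(len(scores))
betas = [beta_for_entropy(scores, h0 * (1 - t / 10)) for t in range(11)]
```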


All the details are described in my doctoral thesis, which is available from the link below.

Documents

Doctoral Thesis

Takayuki Higo, "Research on the Importance Sampling Method for Evolutionary Algorithms Based on Probability Models", doctoral thesis, Tokyo Institute of Technology, 2008.

A theoretical approach to EAs is described.
doctor_thesis.pdf (1.3 MB)

Memo

This is a memo for understanding my doctoral thesis.

Adjusting the Inverse Temperature of the Boltzmann Distribution
Adusting the inverse temperature.pdf (43.9 KB)

Conference papers


Importance sampling regularization for estimation of distribution algorithms

http://dl.acm.org/citation.cfm?id=2001896


Hierarchical importance sampling instead of annealing

http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4424464