Neural network: Using genetic algorithms to train and deploy neural networks: Auto AI - Automatically determine the number of training steps according to the contraction mapping principle



The question of how many steps are needed to train a neural network is answered by the convergence of the genetic algorithm, according to the contraction mapping principle. Determining the number of training steps does not require a hyperparameter optimization method over a constrained domain; the number simply takes whatever value each problem calls for. As in unsupervised learning, there is no monitoring or tuning of the number of training steps: it exists only as a consequence of the genetic algorithm converging, which happens when the network population maps back to itself after a generation. This point is called the fixed point.

Metric space

During evolution, the algorithm creates a sequence of network populations. Let Pt denote the population at generation t; the initial population is P0. We define a metric space as follows:

Let S be a set of populations. A metric d is defined as:

d(Pu, Pv) = |Eu - Ev|
where Eu and Ev are the average errors of the populations at generations u and v, i.e. of Pu and Pv.
We design the population to always improve after each generation, i.e. Ev < Eu ∀ Pu, Pv ∈ S | u < v.
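
As a small illustration, this metric can be computed directly from the two populations' average errors. The sketch below assumes a population is a list of individuals and that an error function mapping an individual to its error is available; these names are assumptions of the example, not definitions from the article.

from statistics import mean

def average_error(population, error):
    # Average error E of a population; `error` maps an individual to its error.
    return mean(error(ind) for ind in population)

def d(pop_u, pop_v, error):
    # Metric from the text: d(Pu, Pv) = |Eu - Ev|.
    return abs(average_error(pop_u, error) - average_error(pop_v, error))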

(S, d) is a metric space because ∀ Pu, Pv, Pt ∈ S:

d(Pu, Pv) ≥ 0
d(Pu, Pv) = 0 ⇔ Pu = Pv
d(Pu, Pv) = d(Pv, Pu)
d(Pu, Pv) ≤ d(Pu, Pt) + d(Pt, Pv)

For the fourth property, without loss of generality assume u ≤ v. There are three cases:
If t ≤ u:
d(Pu, Pv) = Eu - Ev
d(Pt, Pv) = Et - Ev
Since t ≤ u, we have Et ≥ Eu, therefore Et - Ev ≥ Eu - Ev, i.e. d(Pu, Pv) ≤ d(Pt, Pv). Because d(Pu, Pt) ≥ 0, it follows that d(Pu, Pv) ≤ d(Pu, Pt) + d(Pt, Pv).
The case t ≥ v is similar.
If u < t < v:
d(Pu, Pt) = Eu - Et
d(Pt, Pv) = Et - Ev
Therefore
d(Pu, Pt) + d(Pt, Pv) = Eu - Et + Et - Ev = Eu - Ev = d(Pu, Pv)

The population sequence {Pt} corresponds one-to-one with the average error sequence {Et}. The sequence {Et} is a monotonically decreasing sequence of real numbers bounded from below (by 0), so it converges in ℝ. Correspondingly, the sequence {Pt} converges in S.
So the metric space (S, d) is a complete metric space.

Note that the converging population sequence above is only an example that illustrates the completeness of the space. Such a sequence may iterate indefinitely toward a local minimum. We need a different convergent sequence, "much shorter" thanks to the contraction property of the mapping, that goes to the unique fixed point, which is the global optimum.

Contraction mapping

Convergence occurs at the end of evolution. At that point the optimal schema has formed and the algorithm preserves it, so the genetic operations change the population less and less. We can therefore ensure that the population's evolution slows down.

For generations u < v, we have

Ev - Ev+1 < Eu - Eu+1
or, equivalently,
Eu+1 - Ev+1 < Eu - Ev

Consider the mapping f: S → S with f(Pt) = Pt+1. This is the evolution mapping: it acts on the population of generation t and produces the population of generation t + 1.

d(f(Pu), f(Pv)) = d(Pu+1, Pv+1) = Eu+1 - Ev+1 < Eu - Ev = d(Pu, Pv)
We can therefore choose a coefficient q ∈ [0, 1) such that ∀ Pu, Pv ∈ S

d(f(Pu), f(Pv)) ≤ q·d(Pu, Pv)

So f is a contraction mapping.
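
As a rough empirical check, not something prescribed by the article, the contraction coefficient can be estimated from the recorded average-error sequence of a run; the name contraction_ratio and the list E are assumptions of this sketch.

def contraction_ratio(E, u, v):
    # Estimate q = d(f(Pu), f(Pv)) / d(Pu, Pv), where E[t] is the average error
    # of generation t and f(Pt) = Pt+1, so d(f(Pu), f(Pv)) = |E[u+1] - E[v+1]|.
    d_uv = abs(E[u] - E[v])
    d_next = abs(E[u + 1] - E[v + 1])
    return d_next / d_uv if d_uv > 0 else 0.0

If the ratio stays below some fixed q < 1 in the late generations, the evolution mapping behaves as a contraction on the observed run.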

The algorithm will converge to the fixed point. We just need to check whether

f(Pt*-1) = Pt* = Pt*-1

or, equivalently,

Et* - Et*-1 = 0

and then stop.
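
A minimal sketch of this stopping rule follows, assuming helper functions init_population, evolve (one generation of selection, crossover and mutation) and average_error that the article does not define; in practice a small tolerance stands in for exact equality of the errors.

def train_until_fixed_point(init_population, evolve, average_error, tol=0.0):
    # Evolve the population until the average error stops decreasing
    # between consecutive generations, i.e. until the fixed point is reached.
    population = init_population()
    prev_error = average_error(population)
    generations = 0
    while True:
        population = evolve(population)          # f(Pt) = Pt+1
        generations += 1
        curr_error = average_error(population)
        if abs(prev_error - curr_error) <= tol:  # Et* - Et*-1 = 0 (up to tol)
            return population, generations
        prev_error = curr_error

The number of generations returned here is the "number of training steps": it is not chosen in advance, it is simply read off at convergence.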

What is interesting is that the stopping time of the algorithm determines a number of evolutionary generations that differs widely between problems; the runs share only the common feature of convergence. Traditional optimization methods struggle when a parameter can lie in such a wide range, and this is where the superiority of genetic algorithms shows.



Local minima handling

Local minima handling is a remarkable advantage of genetic algorithms compared to other methods. Regardless of what kind of local minimum it is, a local-minimum individual, if any, will be removed from the population. A local-minimum individual cannot reduce its error, while its age increases by 1 every generation. When its age exceeds its lifetime, it is removed from the population, even if its error is still smaller than that of other, younger individuals. Handling local minima does not only bring a one-off benefit: it also prevents a potential local-minimum schema that the local-minimum individual would spawn. This applies to all problems solved with genetic algorithms, not just neural networks.
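
A minimal sketch of this age/lifetime rule, assuming each individual carries age and lifetime attributes; these field names are illustrative, not taken from the article.

def apply_lifetime(population):
    # Age every individual by one generation and drop those whose age exceeds
    # their lifetime, even if their error is still smaller than that of
    # younger individuals.
    survivors = []
    for ind in population:
        ind.age += 1
        if ind.age <= ind.lifetime:
            survivors.append(ind)
    return survivors

A stuck local-minimum individual cannot improve its error, so it is eventually outlived and removed, together with the schema it would otherwise spread through the population.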
