Using genetic algorithms to train and deploy neural networks: the probability distributions


The uniform distribution is the most fundamental one. It is used in most cases, for example to assign crossover (hybridization) rates and mutation rates in genetic algorithms. It can be applied directly, or serve as the basis for a subsequent process, such as walking a tree from the root while picking branches at random. Another specific application is in the embedding techniques of neural networks.
Besides that, the standard normal distribution, and especially the truncated standard normal distribution, is useful for initializing the weights of neural networks.
We will look at each of them in turn, with full source code.

Uniform distribution

The uniform distribution is the most common: every value within a given range has the same probability of being drawn. Fortunately, many library functions already produce this distribution, including erand48(). This function returns nonnegative double-precision floating-point values uniformly distributed over the interval [0.0, 1.0). Transforming to an arbitrary range [a, b) is simple: compute a + erand48(xsubi) * (b - a). For example, to create a uniform distribution over [-1.0, 1.0):

#include <stdlib.h>         /* erand48() */
#include <sys/random.h>     /* getrandom() */

unsigned short xsubi[3];
getrandom(xsubi, sizeof(xsubi), 0);   /* seed the erand48() state with random bytes */

double val = erand48(xsubi) * 2.0 - 1.0;   /* uniform in [-1.0, 1.0) */
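As a usage sketch, a uniform draw in [0.0, 1.0) can also decide whether each gene of a genome mutates in a genetic algorithm. The GENOME_LEN and MUTATION_RATE constants below are hypothetical values chosen only for illustration:

#include <stdlib.h>         /* erand48() */
#include <sys/random.h>     /* getrandom() */

#define GENOME_LEN    8
#define MUTATION_RATE 0.05   /* hypothetical per-gene mutation probability */

int main(void)
{
    unsigned short xsubi[3];
    getrandom(xsubi, sizeof(xsubi), 0);

    double genome[GENOME_LEN] = {0};

    for (int i = 0; i < GENOME_LEN; i++) {
        /* erand48() is uniform in [0.0, 1.0), so the test below
           succeeds with probability MUTATION_RATE */
        if (erand48(xsubi) < MUTATION_RATE)
            genome[i] = erand48(xsubi) * 2.0 - 1.0;   /* new gene in [-1.0, 1.0) */
    }
    return 0;
}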



Standard normal distribution

The normal distribution is characterized by the mean µ and the standard deviation σ. It applies when we need values centered on µ, spread to both sides with a typical deviation of σ from that center.

The standard normal distribution is a special case of the normal distribution with µ = 0 and σ = 1. In practice it is used in neural networks to create the initial weights, because at the starting point the network needs weights that are spread symmetrically around 0 with a standard deviation of 1. One simple way would be to initialize half of the weights to 1 and the other half to -1, but the standard normal distribution is more natural in the general case.

#include <math.h>           /* log(), sqrt(), sin(), M_PI */
#include <stdlib.h>         /* erand48() */
#include <sys/random.h>     /* getrandom() */

unsigned short xsubi[3];
getrandom(xsubi, sizeof(xsubi), 0);

/* Box-Muller transform: two uniform samples in [0, 1) -> one standard normal sample */
double u = erand48(xsubi);
double v = erand48(xsubi);
double val = sqrt(-2.0 * log(u)) * sin(2.0 * M_PI * v);
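As a minimal sketch of how this transform could initialize a layer's weights, assuming a hypothetical standard_normal() helper and an arbitrary layer of 16 weights (neither appears in the original code):

#include <math.h>           /* log(), sqrt(), sin(), M_PI */
#include <stdlib.h>         /* erand48() */
#include <sys/random.h>     /* getrandom() */

/* One standard normal sample via the Box-Muller transform.
   erand48() returns values in [0, 1); a draw of exactly 0.0 is
   astronomically unlikely but would make log() diverge. */
static double standard_normal(unsigned short xsubi[3])
{
    double u = erand48(xsubi);
    double v = erand48(xsubi);
    return sqrt(-2.0 * log(u)) * sin(2.0 * M_PI * v);
}

int main(void)
{
    unsigned short xsubi[3];
    getrandom(xsubi, sizeof(xsubi), 0);

    double weights[16];   /* hypothetical layer of 16 weights */
    for (int i = 0; i < 16; i++)
        weights[i] = standard_normal(xsubi);
    return 0;
}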



Truncated standard normal distribution

The practical application makes one small adjustment: it discards every value that deviates from the mean by more than two standard deviations, i.e. by more than 2σ.

#include <math.h>           /* log(), sqrt(), sin(), fabs(), M_PI */
#include <stdlib.h>         /* erand48() */
#include <sys/random.h>     /* getrandom() */

unsigned short xsubi[3];
getrandom(xsubi, sizeof(xsubi), 0);

/* Draw standard normal samples and reject any that fall outside [-2, 2] */
double u = erand48(xsubi);
double v = erand48(xsubi);
double val = sqrt(-2.0 * log(u)) * sin(2.0 * M_PI * v);
while (fabs(val) > 2.0) {
    u = erand48(xsubi);
    v = erand48(xsubi);
    val = sqrt(-2.0 * log(u)) * sin(2.0 * M_PI * v);
}
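Note that roughly 95% of standard normal samples already fall within two standard deviations of the mean, so the rejection loop above discards only about one sample in twenty and rarely has to iterate more than once.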